23 research outputs found

    Enhancement of fault injection techniques based on the modification of VHDL code

    Full text link
    Deep submicrometer devices are expected to be increasingly sensitive to physical faults. For this reason, fault-tolerance mechanisms are increasingly required in VLSI circuits, and validating their dependability is a primary concern in the design process. Fault injection techniques based on the use of hardware description languages offer important advantages over other techniques. First, since they can be applied during the design phase of the system, they help reduce time-to-market. Second, they offer high controllability and reachability. Among the different techniques, those based on saboteurs and mutants are especially attractive due to their high fault-modeling capability. However, automating these techniques in a fault injection tool is difficult; inserting saboteurs and generating mutants are especially complex tasks. In this paper, we present new proposals to implement saboteurs and mutants for VHDL models which are easy to automate, and whose philosophy can be generalized to other hardware description languages.
    Baraza Calvo, JC.; Gracia-Morán, J.; Blanc Clavero, S.; Gil Tomás, DA.; Gil Vicente, PJ. (2008). Enhancement of fault injection techniques based on the modification of VHDL code. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 16(6):693-706. doi:10.1109/TVLSI.2008.2000254
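    The saboteur/mutant distinction can be illustrated with a minimal software sketch (an illustrative analogy, not code from the paper; all names are hypothetical): a saboteur perturbs a signal between components, while a mutant replaces a component's behavior.

```python
def and_gate(a, b):
    """Fault-free component (the 'golden' model)."""
    return a & b

def saboteur(signal, active, fault_value):
    """A saboteur sits on a signal path: transparent when inactive,
    it forces a faulty value while the injection is active."""
    return fault_value if active else signal

def and_gate_mutant(a, b):
    """A mutant replaces the component itself with a faulty version
    (here, the AND gate mutated into an OR gate)."""
    return a | b

# Golden run vs. saboteur on input 'b' vs. mutated component.
a, b = 1, 1
golden = and_gate(a, b)                                           # 1
sabotaged = and_gate(a, saboteur(b, active=True, fault_value=0))  # 0
mutated = and_gate_mutant(0, 1)                                   # 1 (golden AND gives 0)
print(golden, sabotaged, mutated)
```

    In a VHDL-based tool, the saboteur would be an extra component spliced into a signal, and the mutant an alternative architecture selected via configuration; the Python version only mirrors the idea.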

    Injecting Intermittent Faults for the Dependability Assessment of a Fault-Tolerant Microcomputer System

    Full text link
    © 2016 IEEE. As scaling becomes more and more aggressive, intermittent faults are gaining importance in current deep submicron complementary metal-oxide-semiconductor (CMOS) technologies. This work shows the dependability assessment of a fault-tolerant computer system against intermittent faults. The applied methodology relies on VHDL-based fault injection, which allows the assessment in early design phases, together with a high level of observability and controllability. The evaluated system is a duplex microcontroller system with cold stand-by sparing. A wide set of intermittent fault models has been injected, and coverages and latencies have been measured from the simulation traces. Markov models of this system have been generated, and some dependability functions, such as reliability and safety, have been calculated. From these results, some enhancements of the detection and recovery mechanisms have been suggested. The methodology presented generalizes to any fault-tolerant computer system.
    This work was supported in part by the Universitat Politecnica de Valencia under the Research Project SP20120806, and in part by the Spanish Government under the Research Project TIN2012-38308-C02-01. Associate Editor: J. Shortle.
    Gil Tomás, DA.; Gracia Morán, J.; Baraza Calvo, JC.; Saiz Adalid, LJ.; Gil Vicente, PJ. (2016). Injecting Intermittent Faults for the Dependability Assessment of a Fault-Tolerant Microcomputer System. IEEE Transactions on Reliability. 65(2):648-661. https://doi.org/10.1109/TR.2015.2484058
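    A Markov model of this kind of duplex system with a cold spare can be sketched numerically. The chain below is a deliberately simplified, hedged example (the rates, the coverage value, and the three-state structure are illustrative assumptions, not the paper's actual model):

```python
import math

# Hypothetical parameters for illustration only.
lam = 1e-4   # failure rate of the active unit (per hour)
c   = 0.95   # coverage: probability a fault is detected and the spare engaged

def reliability(t, dt=1.0):
    """Euler integration of the chain's balance equations for states:
    P2 = active + spare OK, P1 = running on spare, PF = system failed."""
    p2, p1, pf = 1.0, 0.0, 0.0
    for _ in range(int(t / dt)):
        dp2 = -lam * p2
        dp1 = c * lam * p2 - lam * p1
        dpf = (1 - c) * lam * p2 + lam * p1
        p2 += dp2 * dt; p1 += dp1 * dt; pf += dpf * dt
    return p2 + p1  # reliability = probability the system still delivers service

# This simple chain also has a closed form: R(t) = e^(-lam*t) * (1 + c*lam*t).
t = 1000.0
print(reliability(t), math.exp(-lam * t) * (1 + c * lam * t))
```

    Fault-injection results feed such models by supplying the measured coverage (here the constant c) and latency figures.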

    Studying the effects of intermittent faults on a microcontroller

    Full text link
    As CMOS technology scales to the nanometer range, designers have to deal with a growing number and variety of fault types. In particular, intermittent faults are expected to be an important issue in modern VLSI circuits. The complexity of manufacturing processes, producing residues and parameter variations, together with special aging mechanisms, may increase the presence of such faults. This work presents a case study of the impact of intermittent faults on the behavior of a commercial microcontroller. To carry out an exhaustive reliability assessment, the methodology used relies on VHDL-based fault injection. In this way, a set of intermittent fault models at the logic and register-transfer abstraction levels has been generated and injected into the VHDL model of the system. From the simulation traces, the occurrences of failures and latent errors have been logged. The impact of intermittent faults has also been compared with that obtained when injecting transient and permanent faults. Finally, some injection experiments have been reproduced on a RISC microprocessor and compared with those of the microcontroller. © 2012 Elsevier Ltd. All rights reserved.
    This work has been funded by the Spanish Government under the Research Project TIN2009-13825.
    Gil Tomás, DA.; Gracia-Morán, J.; Baraza Calvo, JC.; Saiz-Adalid, L.; Gil Vicente, PJ. (2012). Studying the effects of intermittent faults on a microcontroller. Microelectronics Reliability. 52(11):2837-2846. https://doi.org/10.1016/j.microrel.2012.06.004
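    The intermittent stuck-at fault model can be sketched as follows (an illustrative toy, not the paper's injection tool: the workload, bit position, and burst schedule are made-up assumptions):

```python
def intermittent_stuck_at(value, bit, stuck, active):
    """Force bit 'bit' of 'value' to 'stuck' while the fault burst is active."""
    if not active:
        return value
    mask = 1 << bit
    return (value | mask) if stuck else (value & ~mask)

# Toy workload: accumulate into an 8-bit register over 10 cycles, with an
# intermittent stuck-at-1 burst on bit 0 during cycles 3-5.
reg = 0
trace = []
for cycle in range(10):
    reg = (reg + 7) & 0xFF
    reg = intermittent_stuck_at(reg, bit=0, stuck=True, active=3 <= cycle <= 5)
    trace.append(reg)

# Compare against the golden (fault-free) run: divergence that persists after
# the burst ends is a latent error; divergence observed at an output would be
# logged as a failure.
golden = [(7 * (i + 1)) & 0xFF for i in range(10)]
diverged = [g != r for g, r in zip(golden, trace)]
print(diverged)
```

    Note how the error outlives the fault: because the register accumulates, the trace keeps diverging even after cycle 5, which is exactly why latent errors must be logged separately from failures.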

    Obras de Gil Vicente

    No full text
    Vol. 1, 400 p.

    Reducing the Overhead of BCH Codes: New Double Error Correction Codes

    No full text
    The Bose–Chaudhuri–Hocquenghem (BCH) codes are a well-known class of powerful cyclic error correction codes. BCH codes can correct multiple errors with minimal redundancy. However, primitive BCH codes only exist for some word lengths, which do not frequently match those employed in digital systems. This paper focuses on double error correction (DEC) codes for word lengths that are powers of two (8, 16, 32, and 64 bits), which are commonly used in memories. We also focus on hardware implementations of the encoder and decoder circuits for very fast operation. This work proposes new low redundancy and reduced overhead (LRRO) DEC codes, with the same redundancy as the equivalent BCH DEC codes, but whose encoder and decoder circuits present a lower overhead (in terms of propagation delay, silicon area usage, and power consumption). To design the new codes, we used a methodology that searches for parity-check matrices based on error patterns. We implemented and synthesized the new codes and compared their results with those obtained for the BCH codes. Our implementation of the decoder circuits achieved reductions between 2.8% and 8.7% in propagation delay, between 1.3% and 3.0% in silicon area, and between 15.7% and 26.9% in power consumption. Therefore, we propose LRRO codes as an alternative for protecting information against multiple errors.
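    The parity-check-matrix view behind such searches can be shown on the much smaller, classic Hamming(7,4) single-error-correcting code (a standard textbook example, not the paper's LRRO or BCH construction): a valid matrix H must map every correctable error pattern to a distinct, nonzero syndrome, and a DEC search extends this requirement to all single and double error patterns.

```python
# Columns of H are the binary representations of positions 1..7, so the
# syndrome of a single-bit error equals the flipped bit's 1-based position.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]

def syndrome(word):
    """word: list of 7 bits; returns the 3-bit syndrome as an integer."""
    return sum((sum(h * w for h, w in zip(row, word)) % 2) << i
               for i, row in enumerate(H))

def correct(word):
    """Correct a single-bit error; syndrome 0 means no error detected."""
    s = syndrome(word)
    fixed = word[:]
    if s:
        fixed[s - 1] ^= 1
    return fixed

# The defining property a matrix search verifies: every single-error
# pattern has a distinct, nonzero syndrome.
syndromes = {syndrome([1 if i == p else 0 for i in range(7)]) for p in range(7)}
print(len(syndromes) == 7 and 0 not in syndromes)
```

    For a DEC code the same check must hold for all C(n,1) + C(n,2) error patterns at once, which is what makes finding low-overhead matrices for 8-, 16-, 32-, and 64-bit words a nontrivial search problem.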

    Proposal of an Adaptive Fault Tolerance Mechanism to Tolerate Intermittent Faults in RAM

    No full text
    Due to transistor shrinking, intermittent faults are a major concern in current digital systems. This work presents an adaptive fault tolerance mechanism based on error correction codes (ECC), able to modify its behavior when the error conditions change, without increasing the redundancy. As a case example, we have designed a mechanism that can detect intermittent faults and swap from an initial generic ECC to a specific ECC capable of tolerating one intermittent fault. We have inserted the mechanism in the memory system of a 32-bit RISC processor and validated it by using VHDL simulation-based fault injection. We have used two (39, 32) codes: a single error correction–double error detection (SEC–DED) code, and a code developed by our research group, called EPB3932, capable of correcting single errors as well as double and triple adjacent errors that include a bit previously tagged as error-prone. The results of injecting transient, intermittent, and combined intermittent and transient faults show that the proposed mechanism works properly. As an example, the percentage of failures and latent errors is 0% when injecting a triple adjacent fault after an intermittent stuck-at fault. We have synthesized the proposed adaptive fault tolerance mechanism in two types of FPGAs: non-reconfigurable and partially reconfigurable. In both cases, the overhead introduced is affordable in terms of hardware, time, and power consumption.
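    The control logic of such an adaptive scheme can be sketched abstractly (a hedged software sketch: the class name, the threshold, and the counting policy are hypothetical; the SEC-DED and EPB3932 codes themselves are not implemented here):

```python
from collections import Counter

class AdaptiveEccController:
    """Monitors corrected errors per bit; repeated corrections on one bit
    suggest an intermittent fault, so the controller tags that bit as
    error-prone and swaps from the generic code to the specialized one."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.corrections = Counter()   # corrected-error count per bit position
        self.mode = "SEC-DED"          # initial generic code
        self.tagged_bit = None

    def report_correction(self, bit):
        """Called by the decoder each time a single error is corrected."""
        self.corrections[bit] += 1
        if self.mode == "SEC-DED" and self.corrections[bit] >= self.threshold:
            self.tagged_bit = bit
            self.mode = "EPB3932"      # specialized code centered on the tagged bit
        return self.mode

ctrl = AdaptiveEccController(threshold=3)
for _ in range(3):
    ctrl.report_correction(bit=17)     # the same bit fails repeatedly
print(ctrl.mode, ctrl.tagged_bit)
```

    The key design point is that both codes share the same (39, 32) redundancy, so the swap changes only the encoder/decoder logic, not the memory width.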

    On-line Vocational Training for Computer-Based Safety and Security using Competence- and Work-Based Learning.

    Full text link
    [EN] Nowadays, computer systems are present in almost all areas of life. However, it is very difficult to guarantee a given Safety and Security level, and weak or incorrectly deployed Safety and Security policies may lead to unaffordable economic, human, or reputation losses. Current undergraduate programs rarely address Safety and Security in depth, and the definition of the specific competences required by Safety and Security engineers is difficult, since both the present and future needs of ICT professionals should be considered. Work-Based Learning (WBL) can be used for professionals' vocational training, as it promotes the acquisition of knowledge as well as the development of skills. However, training professionals implies working with learners with high potential mobility, a limited amount of time, and heterogeneous prior learning, so an on-line approach should be used. But how can the competences acquired by professionals be recognized across Europe? The European Credit for Vocational Education and Training (ECVET) framework eases the transfer and recognition of the resulting competences among different countries and educational contexts. In this work, the RISKY project is presented. Funded by the European Union through the Leonardo da Vinci programme, this project aims to improve Safety and Security professionals' training using an on-line approach based on competences and WBL.
    This work has been funded by the RISKY Leonardo da Vinci project (#2011-1-FR1-LEO05-24482) from the European Commission. This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
    Gracia-Morán, J.; Ruiz García, JC.; Andrés Martínez, DD.; Baraza Calvo, JC.; Gil Vicente, PJ. (2014). Formación on-line para profesionales en el campo de la Seguridad y la Inocuidad Informática a través de Competencias y Aprendizaje Basado en el Trabajo. Revista de Formación e Innovación Educativa Universitaria. 7(3):143-154. http://hdl.handle.net/10251/46460

    An Aspect-Oriented Approach to Hardware Fault Tolerance for Embedded Systems

    Full text link
    The steady reduction of transistor size has brought embedded solutions into everyday life. However, the same features of deep-submicron technologies that are widening the application spectrum of these solutions are also negatively affecting their dependability. Current practices for the design and deployment of hardware fault tolerance and security strategies remain specific (defined on a case-by-case basis) and mostly manual and error-prone. Aspect orientation, which already promotes a clear separation between functional and non-functional (dependability and security) concerns in software designs, is also an approach with great potential at the hardware level. This chapter addresses the challenging problem of engineering such strategies in a generic way via metaprogramming, and of supporting their subsequent instantiation and deployment on specific hardware designs through open compilation. It shows that promoting a clear separation of concerns in hardware designs, and producing a library of generic but reusable hardware fault and intrusion tolerance mechanisms, is a feasible reality today.
    Andrés Martínez, DD.; Ruiz García, JC.; Espinosa García, J.; Gil Vicente, PJ. (2014). An Aspect-Oriented Approach to Hardware Fault Tolerance for Embedded Systems. In Handbook of Research on Embedded Systems Design. 123-149. doi:10.4018/978-1-4666-6194-3
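    The separation-of-concerns idea is easiest to see in its software form (a software analogy only, not the chapter's hardware flow): a decorator "weaves" a dependability mechanism, here triple modular redundancy (TMR) with majority voting, around an unmodified function, much as the chapter's metaprogramming weaves fault-tolerance mechanisms into hardware designs at compile time.

```python
import functools
from collections import Counter

def tmr(func):
    """Dependability aspect: run func three times and vote on the result."""
    @functools.wraps(func)
    def voted(*args, **kwargs):
        results = [func(*args, **kwargs) for _ in range(3)]
        winner, votes = Counter(results).most_common(1)[0]
        if votes < 2:
            raise RuntimeError("TMR voter: no majority")
        return winner
    return voted

@tmr
def add(a, b):
    # Purely functional code: no fault-tolerance logic mixed in.
    return a + b

print(add(2, 3))  # → 5
```

    The functional code never mentions redundancy, and the aspect never mentions addition; in the hardware setting the "decorator" becomes a generic mechanism from the library, instantiated onto a concrete design by the open compiler.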

    Feasibility and results of an intensive cardiac rehabilitation program. Insights from the MxM (Más por Menos) randomized trial

    No full text
    Introduction and objectives: Cardiac rehabilitation programs (CRP) are a set of interventions to improve the prognosis of cardiovascular disease by influencing patients' physical, mental, and social conditions. However, there are no studies evaluating the optimal duration of these programs. We aimed to compare the results of a standard vs a brief intensive CRP in patients after ST-segment elevation and non–ST-segment elevation acute coronary syndrome through the Más por Menos study (More Intensive Cardiac Rehabilitation Programs in Less Time).
    Methods: In this prospective, randomized, open, evaluator-blind for end-point, and multicenter trial (PROBE design), patients were randomly allocated to either a standard 8-week CRP or an intensive 2-week CRP with booster sessions. A final visit was performed 12 months later, after completion of the program. We assessed adherence to the Mediterranean diet, psychological status, smoking, drug therapy, functional capacity, quality of life, cardiometabolic and anthropometric parameters, cardiovascular events, and all-cause mortality during follow-up.
    Results: A total of 497 patients (mean age, 57.8 ± 10.0 years; 87.3% men) were finally assessed (intensive: n = 262; standard: n = 235). Baseline characteristics were similar between the 2 groups. At 12 months, the results of treadmill ergometry had improved by at least 1 MET in more than 93% of the patients. In addition, adherence to the Mediterranean diet and quality of life were significantly improved by CRP, with no significant differences between the groups. The occurrence of cardiovascular events was similar in the 2 groups.
    Conclusions: Intensive CRP could be as effective as standard CRP in achieving adherence to recommended secondary prevention measures after acute coronary syndrome, and could be an alternative for some patients and centers.